
Adjust initial AI streaming status#4630

Open
hanna-paasivirta wants to merge 2 commits into main from ai-streaming-statuses

Conversation

@hanna-paasivirta

Description

This PR changes the initial progress status shown to the user when starting a conversation with an AI assistant from "Generating response..." to "Thinking...". This change is required because we are adding a more detailed stream of status updates from Apollo, where "Generating response" would feel out of place. OpenFn/apollo#452

Validation steps

  1. Start a conversation with any AI assistant. Check that the first status that gets shown is "Thinking..."

AI Usage

Please disclose whether you've used AI anywhere in this PR (it's cool, we just
want to know!):

  • I have used Claude Code
  • I have used another model
  • I have not used AI

You can read more details in our
Responsible AI Policy

Pre-submission checklist

  • I have performed an AI review of my code (we recommend using /review
    with Claude Code)
  • I have implemented and tested all related authorization policies.
    (e.g., :owner, :admin, :editor, :viewer)
  • I have updated the changelog.
  • I have ticked a box in "AI usage" in this PR

@github-project-automation github-project-automation Bot moved this to New Issues in Core Apr 16, 2026

codecov Bot commented Apr 16, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 89.57%. Comparing base (2f1a43e) to head (03a4471).
⚠️ Report is 16 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4630      +/-   ##
==========================================
- Coverage   89.61%   89.57%   -0.04%     
==========================================
  Files         444      444              
  Lines       21505    21505              
==========================================
- Hits        19272    19264       -8     
- Misses       2233     2241       +8     

☔ View full report in Codecov by Sentry.

@josephjclark
Collaborator

Should we cycle it, rather than using a static value?

How long does it take to get the first response back from apollo?

Should we say Calling assistant or something to imply "we're uploading your question"? Because it's not really thinking yet, it's just taking the input and preparing to think.

Maybe that doesn't matter much but "Thinking" is a) boring b) not very insightful. So I'm just wondering if there's something cheap we can use instead

@hanna-paasivirta
Author

@josephjclark I think it's more frustrating to wait for a system to connect than to wait for it to do something properly, so I'm not sure about "calling assistant".

What if we took the opportunity to reinforce the voice of the assistant here with a more playful pool like:

  • Warming up
  • Cracking knuckles
  • Rolling up sleeves
  • Gathering thoughts
  • Getting oriented
  • Reading the room
  • Taking a breath
  • Sharpening the pencil
  • Opening the toolbox
  • Checking the toolbox
  • Putting the kettle on
  • Laying out the tools
  • Unrolling the blueprints
  • Doing the boring-but-important part
  • Looking before leaping
  • Doing the plumbing
  • Getting on with it
  • Doing the quiet work
  • Checking the pipes
  • Laying the groundwork
  • Pouring the foundation
  • Wiring it up
  • Greasing the gears
  • Oiling the hinges

@josephjclark
Collaborator

@hanna-paasivirta those are great fun, and in keeping with the OpenFn corporate voice. But I also don't think I want the AI assistant to have that same voice? It's too fun, and I know I'm a grumpy old man but I don't want it to be fun.

This also raises a wider point about what we want the assistant's voice to be like, for which we'll have to canvass more opinions.

For now (as we just discussed) let's have a simple pool like Reading the question... thinking about the question...

@hanna-paasivirta
Author

The status appears for a bit too long for "reading" to make sense (it made me wonder why it reads slower than I do). I've gone with "Thinking about the question...", "Working on it...", "Processing your request...", "Examining your question...", "Taking a look...", and "Looking into it...". They don't make 100% sense in all situations, but I think they're adequate.

@github-actions

Security Review

⚠️ The review completed but no findings comment was posted.

See the workflow run for the raw Claude output.

Comment on lines +430 to +434

```javascript
const loadingStatus = useMemo(
  () =>
    LOADING_STATUSES[Math.floor(Math.random() * LOADING_STATUSES.length)],
  [isLoading]
);
```
Contributor


Hey @hanna-paasivirta this is a very interesting one. It caught my eye, so I went to the React docs for useMemo and saw that the factory "should be a pure function" and that React reserves the right to throw away cached values (see the useMemo caveats). Math.random() inside the factory breaks that contract, and in Strict Mode development builds the factory runs twice per render, which muddies the intent. It works fine today, but it leans on behavior React documents as non-guaranteed. I think a cleaner shape uses the "storing information from previous renders" pattern from the useState docs:

```javascript
// Inside the component (useState and useRef imported from 'react').
const pickStatus = () =>
  LOADING_STATUSES[Math.floor(Math.random() * LOADING_STATUSES.length)];

// Lazy initializer: pickStatus runs once on the first render.
const [loadingStatus, setLoadingStatus] = useState(pickStatus);
const wasLoading = useRef(isLoading);

// Pick a fresh phrase on each false -> true transition of isLoading.
if (isLoading && !wasLoading.current) {
  setLoadingStatus(pickStatus());
}
wasLoading.current = isLoading;
```

That should give the same behavior (a new random phrase each time loading starts, stable within a session), but the randomness lives in useState, where impurity is fine, and the trigger is an explicit false -> true transition.

I don't think this is a blocker, though. I just learnt something and wanted to share it; maybe it could help make the code better.

Collaborator


Yeah I second this - it's a weird usage of useMemo.

@elias-ba what if we did this on sendMessage - when the message is sent, set the streaming status to one of these random messages. No need for a transient local state now, and we guarantee a new one will be picked each time.
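A minimal sketch of that sendMessage approach (helper and callback names like setStreamingStatus and postToAssistant are illustrative, not from the actual codebase): the random pick happens once per send, so no memoization or loading-transition bookkeeping is needed.

```javascript
// Hypothetical sketch of picking a status on send; names are illustrative.
const LOADING_STATUSES = [
  'Thinking about the question...',
  'Working on it...',
  'Processing your request...',
  'Examining your question...',
  'Taking a look...',
  'Looking into it...',
];

// Pure helper: pick one phrase from the pool at random.
const pickLoadingStatus = () =>
  LOADING_STATUSES[Math.floor(Math.random() * LOADING_STATUSES.length)];

// On send, set the streaming status once. A fresh phrase is guaranteed
// for every message because the pick happens at the call site.
function sendMessage(text, setStreamingStatus, postToAssistant) {
  setStreamingStatus(pickLoadingStatus());
  return postToAssistant(text);
}
```

Because pickLoadingStatus is pure and only called at send time, there is no stale memo to invalidate and no ref tracking of loading transitions.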

